TTS: Text-to-Speech for all.

TTS is a library for advanced Text-to-Speech generation. It builds on the latest research and is designed to achieve the best trade-off among ease of training, speed, and quality. TTS comes with pretrained models and tools for measuring dataset quality, and it is already used in 20+ languages for products and research projects.


📢 English Voice Samples and SoundCloud playlist

👨‍🍳 TTS training recipes

📄 Text-to-Speech paper collection

💬 Where to ask questions

Please use our dedicated channels for questions and discussion. Help is much more valuable if it's shared publicly, so that more people can benefit from it.

- 🚨 Bug Reports: GitHub Issue Tracker
- ❔ FAQ: TTS/Wiki
- 🎁 Feature Requests & Ideas: GitHub Issue Tracker
- 👩‍💻 Usage Questions: Discourse Forum
- 🗯 General Discussion: Discourse Forum and Matrix Channel

🔗 Links and Resources

- 💾 Installation: TTS/README.md
- 👩🏾‍🏫 Tutorials and Examples: TTS/Wiki
- 🚀 Released Models: TTS/Wiki
- 💻 Docker Image: Repository by @synesthesiam
- 🖥️ Demo Server: TTS/server
- 🤖 Running TTS on Terminal: TTS/README.md
- ✨ How to contribute: TTS/README.md

🥇 TTS Performance

"Mozilla*" and "Judy*" are our models. Details...

Features

- High-performance Deep Learning models for Text2Speech tasks.
  - Text2Spec models (Tacotron, Tacotron2, Glow-TTS, SpeedySpeech).
  - Speaker Encoder to compute speaker embeddings efficiently.
  - Vocoder models (MelGAN, Multiband-MelGAN, GAN-TTS, ParallelWaveGAN, WaveGrad, WaveRNN).
- Fast and efficient model training.
- Detailed training logs on console and Tensorboard.
- Support for multi-speaker TTS.
- Efficient multi-GPU training.
- Ability to convert PyTorch models to Tensorflow 2.0 and TFLite for inference.
- Released models in PyTorch, Tensorflow and TFLite.
- Tools to curate Text2Speech datasets under dataset_analysis.
- Demo server for model testing.
- Notebooks for extensive model benchmarking.
- Modular (but not too much) code base enabling easy testing for new ideas.

Implemented Models

Text-to-Spectrogram
- Tacotron: paper
- Tacotron2: paper
- Glow-TTS: paper
- Speedy-Speech: paper

Attention Methods
- Guided Attention: paper
- Forward Backward Decoding: paper
- Graves Attention: paper
- Double Decoder Consistency: blog

Speaker Encoder
- GE2E: paper
- Angular Loss: paper

Vocoders
- MelGAN: paper
- MultiBandMelGAN: paper
- ParallelWaveGAN: paper
- GAN-TTS discriminators: paper
- WaveRNN: origin
- WaveGrad: paper

You can also help us implement more models. Some TTS-related work can be found here.

Install TTS

TTS supports Python >= 3.6. Once TTS is installed, you can run a released TTS model and vocoder directly from the command line:

tts --text "Text for TTS" \
    --model_name "<type>/<language>/<dataset>/<model_name>" \
    --vocoder_name "<type>/<language>/<dataset>/<model_name>" \
    --out_path folder/to/save/output/
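For example, a minimal end-to-end run might look like the following sketch. The package name for pip, the --list_models flag, and the model/vocoder names are assumptions for illustration only; check the released-models list for the exact names available in your version:

pip install TTS
tts --list_models
tts --text "Hello, this is a test." \
    --model_name "tts_models/en/ljspeech/tacotron2" \
    --vocoder_name "vocoder_models/en/ljspeech/multiband-melgan" \
    --out_path output/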

Run your own TTS model (Using Griffin-Lim Vocoder)

tts --text "Text for TTS" \ --model_path path/to/model.pth.tar \ --config_path path/to/config.json \ --out_path output/path/speech.wav

Run your own TTS and Vocoder models

tts --text "Text for TTS" \ --model_path path/to/config.json \ --config_path path/to/model.pth.tar \ --out_path output/path/speech.wav \ --vocoder_path path/to/vocoder.pth.tar \ --vocoder_config_path path/to/vocoder_config.json

Note: You can use ./TTS/bin/synthesize.py if you prefer running tts from the TTS project folder.
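For example, a sketch of the same call from the project folder, assuming synthesize.py accepts the same arguments as the tts entry point (run it with --help to confirm the exact flags in your checkout):

python TTS/bin/synthesize.py --text "Text for TTS" \
    --model_path path/to/model.pth.tar \
    --config_path path/to/config.json \
    --out_path output/path/speech.wav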

Example: Training and Fine-tuning LJ-Speech Dataset

Here you can find a Colab notebook for a hands-on example of training LJSpeech. Alternatively, you can manually follow the steps below.

To start with, split metadata.csv into train and validation subsets, metadata_train.csv and metadata_val.csv respectively. Note that for text-to-speech, validation performance can be misleading, since the loss value does not directly measure voice quality to the human ear, nor does it measure the attention module's performance. Therefore, running the model with new sentences and listening to the results is the best way to judge it.

shuf metadata.csv > metadata_shuf.csv
head -n 12000 metadata_shuf.csv > metadata_train.csv
tail -n 1100 metadata_shuf.csv > metadata_val.csv
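As a quick sanity check, you can confirm the two subsets cover the whole shuffled file (LJSpeech's metadata.csv has 13,100 lines, so 12,000 + 1,100 splits it exactly):

wc -l metadata_shuf.csv metadata_train.csv metadata_val.csv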

To train a new model, you need to define your own config.json with the model details, training configuration and more (check the examples). Then call the corresponding train script.
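For example, a common way to start (file names below are only illustrative) is to copy the example config shipped with the repository and point it at your dataset and the metadata splits created above:

cp TTS/tts/configs/config.json my_config.json
# edit my_config.json: set the dataset path, metadata_train.csv / metadata_val.csv,
# the output path, and training parameters such as batch size and audio settings
python TTS/bin/train_tacotron.py --config_path my_config.json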

For instance, to train a Tacotron or Tacotron2 model on the LJSpeech dataset, follow these steps.

python TTS/bin/train_tacotron.py --config_path TTS/tts/configs/config.json

To fine-tune a model, use --restore_path.

python TTS/bin/train_tacotron.py --config_path TTS/tts/configs/config.json --restore_path /path/to/your/model.pth.tar

To continue an old training run, use --continue_path.

python TTS/bin/train_tacotron.py --continue_path /path/to/your/run_folder/

For multi-GPU training, call distribute.py. It runs any provided training script in a multi-GPU setting.

CUDA_VISIBLE_DEVICES="0,1,4" python TTS/bin/distribute.py --script train_tacotron.py --config_path TTS/tts/configs/config.json

Each run creates a new output folder containing the used config.json, model checkpoints and Tensorboard logs.

If the run errors out or is interrupted before any checkpoint has been written to the output folder, the whole folder is removed.

You can also monitor training with Tensorboard by pointing its --logdir argument to the experiment folder.
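For example, assuming Tensorboard is installed in the same environment (the run folder path is illustrative):

tensorboard --logdir /path/to/your/run_folder/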

Contribution Guidelines

This repository is governed by Mozilla's code of conduct and etiquette guidelines. For more details, please read the Mozilla Community Participation Guidelines.

1. Create a new branch.
2. Implement your changes.
3. (if applicable) Add Google Style docstrings.
4. (if applicable) Implement a test case under the tests folder.
5. (Optional but preferred) Run the tests: ./run_tests.sh
6. Run the linter:
   pip install pylint cardboardlint
   cardboardlinter --refspec master
7. Send a PR to the dev branch and explain what the change is about.
8. Let us discuss until we make it perfect :).
9. We merge it to the dev branch once things look good.
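A minimal sketch of that workflow from a local clone (the branch name below is just an example):

git checkout -b my-new-feature
# implement changes, docstrings and tests, then:
./run_tests.sh
pip install pylint cardboardlint
cardboardlinter --refspec master
# push the branch to your fork and open a PR against the dev branch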

Feel free to ping us on our communication channels at any step if you need help.

Collaborative Experimentation Guide

If you would like to use TTS to try a new idea and share your experiments with the community, we urge you to follow the guideline below for better collaboration. (If you have an idea for better collaboration, let us know.)

- Create a new branch.
- Open an issue pointing to your branch.
- Explain your idea and experiment.
- Share your results regularly (Tensorboard log files, audio results, visuals etc.).

Major TODOs

- Implement the model.
- Generate human-like speech on the LJSpeech dataset.
- Generate human-like speech on a different dataset (Nancy) (TWEB).
- Train TTS with r=1 successfully.
- Enable process-based distributed training, similar to https://github.com/fastai/imagenet-fast/.
- Adapting Neural Vocoder. TTS works with WaveRNN and ParallelWaveGAN (https://github.com/erogol/WaveRNN and https://github.com/erogol/ParallelWaveGAN).
- Multi-speaker embedding.
- Model optimization (model export, model pruning etc.).

Acknowledgement

- https://github.com/keithito/tacotron (Dataset pre-processing)
- https://github.com/r9y9/tacotron_pytorch (Initial Tacotron architecture)
- https://github.com/kan-bayashi/ParallelWaveGAN (vocoder library)
- https://github.com/jaywalnut310/glow-tts (Original Glow-TTS implementation)
- https://github.com/fatchord/WaveRNN/ (Original WaveRNN implementation)


【本文地址】

公司简介

联系我们

今日新闻


点击排行

实验室常用的仪器、试剂和
说到实验室常用到的东西,主要就分为仪器、试剂和耗
不用再找了,全球10大实验
01、赛默飞世尔科技(热电)Thermo Fisher Scientif
三代水柜的量产巅峰T-72坦
作者:寞寒最近,西边闹腾挺大,本来小寞以为忙完这
通风柜跟实验室通风系统有
说到通风柜跟实验室通风,不少人都纠结二者到底是不
集消毒杀菌、烘干收纳为一
厨房是家里细菌较多的地方,潮湿的环境、没有完全密
实验室设备之全钢实验台如
全钢实验台是实验室家具中较为重要的家具之一,很多

推荐新闻


图片新闻

实验室药品柜的特性有哪些
实验室药品柜是实验室家具的重要组成部分之一,主要
小学科学实验中有哪些教学
计算机 计算器 一般 打孔器 打气筒 仪器车 显微镜
实验室各种仪器原理动图讲
1.紫外分光光谱UV分析原理:吸收紫外光能量,引起分
高中化学常见仪器及实验装
1、可加热仪器:2、计量仪器:(1)仪器A的名称:量
微生物操作主要设备和器具
今天盘点一下微生物操作主要设备和器具,别嫌我啰嗦
浅谈通风柜使用基本常识
 众所周知,通风柜功能中最主要的就是排气功能。在

专题文章

    CopyRight 2018-2019 实验室设备网 版权所有 win10的实时保护怎么永久关闭